15 research outputs found
Chimerical dataset creation protocol based on Doddington Zoo: a biometric application with face, eye, and ECG.
Multimodal systems are a way to enhance the robustness and effectiveness of biometric systems. A proper multimodal dataset is of the utmost importance to build such systems. The literature presents some multimodal datasets, although, to the best of our knowledge, there are no previous studies combining face, iris/eye, and vital signals such as the electrocardiogram (ECG). Moreover, there is no methodology to guide the construction and evaluation of a chimeric dataset. Taking that fact into account, in this work we propose to create a chimeric dataset from three modalities: ECG, eye, and face. Based on the Doddington Zoo criteria, we also propose a generic and systematic protocol imposing constraints for the creation of homogeneous chimeric individuals, which allows us to perform a fair and reproducible benchmark. Moreover, we propose a multimodal approach for these modalities based on state-of-the-art deep representations built by convolutional neural networks. We conduct the experiments in the open-world verification mode and in two different scenarios (intra-session and inter-session), using three modalities from two datasets: CYBHi (ECG) and FRGC (eye and face). Our multimodal approach achieves an impressive decidability of 7.20 ± 0.18, yielding an almost perfect verification system (i.e., an Equal Error Rate (EER) of 0.20% ± 0.06) on the intra-session scenario with unknown data. On the inter-session scenario, we achieve a decidability of 7.78 ± 0.78 and an EER of 0.06% ± 0.06. In summary, these figures represent a gain of over 28% in decidability and a reduction of over 11% in the EER on the intra-session scenario for unknown data compared to the best-known unimodal approach. Besides, we achieve an improvement greater than 22% in decidability and an EER reduction of over 6% in the inter-session scenario.
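The two figures of merit reported above, decidability and EER, are standard for verification systems and can be computed from the genuine and impostor score distributions. A minimal sketch (the threshold sweep approximates the EER rather than interpolating the exact FAR = FRR crossing; score conventions and data are illustrative, not the paper's):

```python
import numpy as np

def decidability(genuine, impostor):
    """Decidability index d': separation between genuine and impostor
    score distributions, normalized by their pooled spread."""
    mu_g, mu_i = np.mean(genuine), np.mean(impostor)
    var_g, var_i = np.var(genuine), np.var(impostor)
    return abs(mu_g - mu_i) / np.sqrt((var_g + var_i) / 2.0)

def equal_error_rate(genuine, impostor):
    """Approximate EER: sweep thresholds and take the operating point where
    false-accept and false-reject rates are closest (higher score = match)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)  # impostors wrongly accepted
        frr = np.mean(genuine < t)    # genuine users wrongly rejected
        best = min(best, max(far, frr))
    return best
```

Well-separated score distributions drive the EER toward zero and the decidability up, which is the regime the abstract's 0.20% / 7.20 figures describe.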
The PREDICTS database: a global database of how local terrestrial biodiversity responds to human impacts
Biodiversity continues to decline in the face of increasing anthropogenic pressures
such as habitat destruction, exploitation, pollution and introduction of
alien species. Existing global databases of species’ threat status or population
time series are dominated by charismatic species. The collation of datasets with
broad taxonomic and biogeographic extents, and that support computation of
a range of biodiversity indicators, is necessary to enable better understanding of
historical declines and to project – and avert – future declines. We describe and
assess a new database of more than 1.6 million samples from 78 countries representing
over 28,000 species, collated from existing spatial comparisons of
local-scale biodiversity exposed to different intensities and types of anthropogenic
pressures, from terrestrial sites around the world. The database contains
measurements taken in 208 (of 814) ecoregions, 13 (of 14) biomes, 25 (of 35)
biodiversity hotspots and 16 (of 17) megadiverse countries. The database contains
more than 1% of the total number of all species described, and more than
1% of the described species within many taxonomic groups – including flowering
plants, gymnosperms, birds, mammals, reptiles, amphibians, beetles, lepidopterans
and hymenopterans. The dataset, which is still being added to, is
therefore already considerably larger and more representative than those used
by previous quantitative models of biodiversity trends and responses. The database
is being assembled as part of the PREDICTS project (Projecting Responses
of Ecological Diversity In Changing Terrestrial Systems – www.predicts.org.uk).
We make site-level summary data available alongside this article. The full database
will be publicly available in 2015.
Inter-patient ECG heartbeat classification with temporal VCG optimized by PSO.
Classifying arrhythmias can be a tough task for a human being, and automating it is highly
desirable. Nevertheless, fully automatic arrhythmia classification from electrocardiogram (ECG)
signals is a challenging task when the inter-patient paradigm is considered. For the inter-patient
paradigm, classifiers are evaluated on signals of unknown subjects, resembling the real world scenario.
In this work, we explore a novel ECG representation based on vectorcardiogram (VCG), called temporal
vectorcardiogram (TVCG), along with a complex network for feature extraction. We also fine-tune
the SVM classifier and perform feature selection with a particle swarm optimization (PSO) algorithm.
Results for the inter-patient paradigm show that the proposed method achieves results comparable
to the state of the art on the MIT-BIH database (53% positive predictive value (+P) for the supraventricular ectopic
beat (S) class and 87.3% sensitivity (Se) for the ventricular ectopic beat (V) class), that TVCG is a richer
representation of the heartbeat, and that it could be useful for problems involving the cardiac signal and
pattern recognition.
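The per-class metrics quoted above, sensitivity (Se) and positive predictivity (+P), are the usual recall and precision computed from a confusion matrix over heartbeat classes. A small sketch (the example matrix is illustrative, not the paper's results):

```python
import numpy as np

def per_class_metrics(conf):
    """Per-class sensitivity (Se) and positive predictivity (+P) from a
    confusion matrix conf[true_class, predicted_class]."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                # correctly classified beats per class
    se = tp / conf.sum(axis=1)        # Se = TP / (all beats truly in class)
    pp = tp / conf.sum(axis=0)        # +P = TP / (all beats predicted as class)
    return se, pp
```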
ECG-based heartbeat classification for arrhythmia detection: a survey.
An electrocardiogram (ECG) measures the electric activity of the heart and has been widely used for detecting heart diseases due to its simplicity and non-invasive nature. By analyzing the electrical signal of each heartbeat, i.e., the combination of action impulse waveforms produced by different specialized cardiac tissues found in the heart, it is possible to detect some of its abnormalities. In the last decades, several works were developed to produce automatic ECG-based heartbeat classification methods. In this work, we survey the current state-of-the-art methods of ECG-based automated abnormal heartbeat classification by presenting the ECG signal preprocessing, the heartbeat segmentation techniques, the feature description methods, and the learning algorithms used. In addition, we describe some of the databases used for evaluation of methods indicated by a well-known standard developed by the Association for the Advancement of Medical Instrumentation (AAMI) and described in ANSI/AAMI EC57:1998/(R)2008 (ANSI/AAMI, 2008). Finally, we discuss limitations and drawbacks of the methods in the literature, presenting concluding remarks and future challenges, and we also propose an evaluation process workflow to guide authors in future works.
Evaluating a hierarchical approach for heartbeat classification from ECG.
Some arrhythmias are rare and harmless, while others may
result in serious cardiac issues, and several ECG analysis methods
have been proposed in the literature to automatically classify the various
classes of arrhythmias. Following the Association for the Advancement of
Medical Instrumentation (AAMI) standard, 15 classes of heartbeats can be
hierarchically grouped into five superclasses. In this work, we propose to
employ the hierarchical classification paradigm to five ECG analysis methods
in the literature, and compare their performance with the flat classification
paradigm. In our experiments, we use the MIT-BIH Arrhythmia Database and
analyse the use of hierarchical classification following the AAMI standard and
a well-known and established evaluation protocol using five superclasses. The
experimental results showed that the hierarchical classification provided the
highest gross accuracy for most of the methods used in this work and provided
an improvement in classification performance for the N and SVEB superclasses.
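The hierarchical grouping referenced above collapses the 15 MIT-BIH heartbeat annotation classes into the five AAMI superclasses. A sketch of the commonly used symbol-to-superclass mapping (the grouping below follows the usual AAMI convention; consult ANSI/AAMI EC57 for the authoritative definition):

```python
# Commonly used mapping from MIT-BIH beat annotation symbols to the
# five AAMI superclasses (N, SVEB, VEB, F, Q).
AAMI_SUPERCLASS = {
    # N: normal and bundle-branch-block / escape beats
    "N": "N", "L": "N", "R": "N", "e": "N", "j": "N",
    # SVEB (S): supraventricular ectopic beats
    "A": "SVEB", "a": "SVEB", "J": "SVEB", "S": "SVEB",
    # VEB (V): ventricular ectopic beats
    "V": "VEB", "E": "VEB",
    # F: fusion of ventricular and normal beats
    "F": "F",
    # Q: paced, fusion-of-paced, and unclassifiable beats
    "/": "Q", "f": "Q", "Q": "Q",
}

def to_superclass(symbol):
    """Map one MIT-BIH beat symbol to its AAMI superclass (None if unknown)."""
    return AAMI_SUPERCLASS.get(symbol)
```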
Learning deep off-the-person heart biometrics representations.
Since the beginning of the new millennium, the electrocardiogram (ECG) has been studied as a biometric trait for security systems and other applications. Recently, with devices such as smartphones and tablets, the acquisition of the ECG signal in the off-the-person category has made this biometric signal suitable for real scenarios. In this paper, we introduce the usage of deep learning techniques, specifically convolutional networks, for extracting useful representations for heart biometric recognition. In particular, we investigate the learning of feature representations for heart biometrics from two sources: the raw heartbeat signal and the heartbeat spectrogram. We also introduce heartbeat data augmentation techniques, which are very important for generalization in the context of deep learning approaches. Using the same experimental setup for six methods in the literature, we show that our proposal achieves state-of-the-art results on the two off-the-person publicly available databases.
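The second input source mentioned above, the heartbeat spectrogram, is a time-frequency representation of a segmented beat. A minimal sketch using SciPy (the sampling rate, window length, and overlap below are illustrative, not the paper's values):

```python
import numpy as np
from scipy.signal import spectrogram

def heartbeat_spectrogram(beat, fs=1000, nperseg=64, noverlap=48):
    """Log-power spectrogram of one heartbeat segment.
    fs: sampling rate in Hz; nperseg/noverlap: STFT window parameters."""
    f, t, Sxx = spectrogram(beat, fs=fs, nperseg=nperseg, noverlap=noverlap)
    # Log scaling is a common normalization before feeding a CNN.
    return np.log(Sxx + 1e-10)
```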
A GPU deep learning metaheuristic based model for time series forecasting.
As the new generation of smart sensors evolves towards high-sampling-rate acquisition systems, the
amount of information to be handled by learning algorithms has been increasing. The Graphics
Processing Unit (GPU) architecture provides a greener alternative with low energy consumption for mining
big data, bringing the power of thousands of processing cores into a single chip, thus opening a wide
range of possible applications. In this paper (a substantial extension of the short version presented at
REM2016 on April 19–21, Maldives [1]), we design a novel parallel strategy for time series learning, in
which different parts of the time series are evaluated by different threads. The proposed strategy is
inserted inside the core of a hybrid metaheuristic model, applied for learning patterns from an important
mini/microgrid forecasting problem, the household electricity demand forecasting. The future smart
cities will surely rely on distributed energy generation, in which citizens should be aware about how
to manage and control their own resources. In this sense, energy disaggregation research will be part
of several typical and useful microgrid applications. Computational results show that the proposed
GPU learning strategy is scalable as the number of training rounds increases, emerging as a promising
deep learning tool to be embedded into smart sensors.
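The parallel strategy described above, evaluating different parts of the time series in different threads, can be illustrated with a toy CPU version. A sketch only: the forecaster here is a hypothetical one-step persistence model and the chunking is illustrative, not the paper's GPU kernel or its hybrid metaheuristic:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def chunk_error(chunk):
    """Mean absolute error of a one-step persistence forecast on one chunk
    (forecast: next value equals the current value)."""
    pred = chunk[:-1]
    return float(np.mean(np.abs(chunk[1:] - pred)))

def parallel_series_error(series, n_chunks=4):
    """Split the series into chunks and score each chunk in its own worker,
    mirroring the idea of evaluating series segments in parallel."""
    chunks = np.array_split(np.asarray(series, dtype=float), n_chunks)
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        errors = list(pool.map(chunk_error, chunks))
    return float(np.mean(errors))
```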
Spatial cluster analysis using particle swarm optimization and dispersion function.
Spatial pattern studies are of great interest to the scientific community, and the spatial scan statistic is a widely used technique to analyze such patterns. A key point for the construction of methods for the detection of irregularly shaped clusters is that, as the geometrical shape gains more degrees of freedom, some correction should be employed to compensate for the increased flexibility. This paper proposes a multi-objective approach to the cluster detection problem using the Particle Swarm Optimization technique, aggregating a novel penalty function, called the dispersion function, that allows only clusters which are subsets of a circular zone of moderate size. Compared to other regularity functions, the multi-objective scan with the dispersion function is faster and suited for the detection of moderately irregularly shaped clusters. An application is presented using statewide data for Chagas disease in puerperal women in Minas Gerais state, Brazil.
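The spatial scan statistic that the dispersion-function approach builds on scores each candidate zone by a Poisson log-likelihood ratio. A sketch of the standard Kulldorff-style score (this is the generic scan statistic, not the paper's multi-objective formulation or its dispersion penalty):

```python
import math

def poisson_scan_llr(c, mu, C):
    """Log-likelihood ratio for a candidate zone under the Poisson model:
    c = observed cases inside the zone, mu = expected cases inside,
    C = total observed cases in the study region."""
    if c <= mu:
        return 0.0  # only elevated-risk zones are scored
    inside = c * math.log(c / mu)
    outside = (C - c) * math.log((C - c) / (C - mu))
    return inside + outside
```

A scan method evaluates this score over many candidate zones and reports the highest-scoring one; the dispersion function then penalizes zones that stray too far from a compact circular shape.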
Robust automated cardiac arrhythmia detection in ECG beat signals.
Nowadays, millions of people are affected by
heart diseases worldwide, and a considerable number
of them could be aided through electrocardiogram
(ECG) trace analysis, which involves the study of
arrhythmia impacts on electrocardiogram patterns. In this
work, we carried out the task of automatic arrhythmia
detection in ECG patterns by means of supervised machine
learning techniques, the main contribution of this
paper being the introduction of the optimum-path forest (OPF) classifier
to this context. We compared six distance metrics, six
feature extraction algorithms and three classifiers in two
variations of the same dataset, with the performance of the
techniques compared in terms of effectiveness and efficiency.
Although OPF showed a greater ability to generalize to unseen
data, the support vector machine (SVM)-based
classifier presented the highest accuracy. However, OPF
proved to be more efficient than SVM in terms of
computational time for both the training and test phases.
EEG time series learning and classification using a hybrid forecasting model calibrated with GVNS.
Brain activity can be seen as a time series; in particular, an electroencephalogram
(EEG) can measure it over a specific time period. In this regard, brain fingerprints
can be learned by machine learning techniques. These models
have been advocated as EEG-based biometric systems. In this study, we apply
a recent Hybrid Forecasting Model, which calibrates its if-then fuzzy rules with a
hybrid GVNS metaheuristic algorithm, in order to learn those patterns. Due to
the stochasticity of the VNS procedure, models with different characteristics can be
generated for each individual. Some EEG recordings from 109 volunteers, measured
using 64-channel EEG at a 160 Hz sampling rate, are used as cases of
study. Different forecasting models are calibrated with the GVNS and used for the
classification task. New rules for classifying individuals using forecasting models are introduced. Computational results indicate that the proposed strategy
can be improved and embedded in future biometric systems.
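The classification rule sketched in the abstract, assigning a signal to the identity whose personal forecasting model fits it best, can be illustrated compactly. A sketch only: the forecasters here are hypothetical callables mapping a signal to one-step-ahead predictions, standing in for the paper's GVNS-calibrated fuzzy-rule models:

```python
import numpy as np

def classify_by_forecast_error(signal, models):
    """models: dict mapping identity -> forecaster(signal) that returns
    predictions aligned with signal[1:]. Returns the identity whose personal
    model forecasts the test signal with the lowest mean squared error."""
    errors = {}
    for identity, forecaster in models.items():
        pred = np.asarray(forecaster(signal))
        errors[identity] = float(np.mean((np.asarray(signal[1:]) - pred) ** 2))
    return min(errors, key=errors.get)
```

The design choice mirrors the abstract: no separate classifier is trained; identification falls out of comparing per-person forecasting residuals.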